13 research outputs found

    Ambulance Emergency Response Optimization in Developing Countries

    Full text link
    The lack of emergency medical transportation is viewed as the main barrier to accessing emergency medical care in low- and middle-income countries (LMICs). In this paper, we present a robust optimization approach to optimize both the location and routing of emergency response vehicles, accounting for the uncertainty in travel times and spatial demand characteristic of LMICs. We traveled to Dhaka, Bangladesh, the sixth largest and third most densely populated city in the world, to conduct field research, resulting in the collection of two unique datasets that inform our approach. These data are leveraged to develop machine learning methodologies to estimate demand for emergency medical services in an LMIC setting and to predict the travel time between any two locations in the road network for different times of day and days of the week. We combine our robust optimization and machine learning frameworks with real data to provide an in-depth investigation into three policy-related questions. First, we demonstrate that outpost locations optimized for weekday rush hour lead to good performance for all times of day and days of the week. Second, we find that significant improvements in emergency response times can be achieved by relocating a small number of outposts, and that the performance of the current system could be replicated using only 30% of the resources. Lastly, we show that a fleet of small motorcycle-based ambulances has the potential to significantly outperform traditional ambulance vans: they are able to capture three times more demand while reducing the median response time by 42%, owing to the increased routing flexibility of nimble vehicles on a larger road network. Our results provide practical insights for emergency response optimization that can be leveraged by hospital-based and private ambulance providers in Dhaka and other urban centers in LMICs.
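
    As a rough illustration of the placement side of this approach, the sketch below (a minimal Python example, not the authors' robust optimization model; all travel times and problem sizes are synthetic placeholders) greedily chooses outpost locations to minimize the worst-case average response time across a few time-of-day scenarios:

        # Greedy min-max outpost selection over travel-time scenarios (synthetic data).
        import random

        random.seed(0)
        N_DEMAND, N_SITES, N_SCENARIOS, BUDGET = 200, 15, 3, 4

        # travel_time[s][i][j]: minutes from candidate outpost j to demand point i in scenario s
        travel_time = [[[random.uniform(2, 40) for _ in range(N_SITES)]
                        for _ in range(N_DEMAND)]
                       for _ in range(N_SCENARIOS)]

        def worst_case_cost(chosen):
            """Average response time to the nearest chosen outpost, worst case over scenarios."""
            worst = 0.0
            for s in range(N_SCENARIOS):
                total = sum(min(travel_time[s][i][j] for j in chosen) for i in range(N_DEMAND))
                worst = max(worst, total / N_DEMAND)
            return worst

        chosen = []
        for _ in range(BUDGET):
            # Add the candidate site that most reduces the worst-case average response time.
            best_site = min((j for j in range(N_SITES) if j not in chosen),
                            key=lambda j: worst_case_cost(chosen + [j]))
            chosen.append(best_site)

        print("selected outposts:", sorted(chosen),
              "worst-case avg response (min):", round(worst_case_cost(chosen), 1))

    A full treatment would replace the synthetic inputs with the ML-predicted travel times and demand estimates described above, and the greedy loop with the paper's robust optimization formulation.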

    Vertical Distribution and Migration Patterns of Nautilus pompilius

    Get PDF
    Vertical depth migrations into shallower waters at night by the chambered cephalopod Nautilus were first hypothesized in the early 20th century. Subsequent studies have supported the hypothesis that Nautilus spend daytime hours at depth and only ascend to around 200 m at night. Here we challenge this idea of a universal Nautilus behavior. Ultrasonic telemetry techniques were employed to track eleven specimens of Nautilus pompilius for periods ranging from one to 78 days at Osprey Reef, Coral Sea, Australia. To supplement these observations, six remotely operated vehicle (ROV) dives were conducted at the same location, providing 29 hours of observations at depths of 100 to 800 m and sighting an additional 48 individuals, including five juveniles, all deeper than 489 m. The resulting data suggest virtually continuous nightly movement between depths of 130 and 700 m, with daytime behavior split between either virtual stasis at relatively shallow depths of 160–225 m or active foraging at depths between 489 and 700 m. The findings also extend the known habitable depth range of Nautilus to 700 m, demonstrate juvenile distribution within the same habitat as adults, and document daytime feeding behavior. These data support the hypothesis that, contrary to the previously described diurnal pattern of occupying shallower depths at night than during the day, more complex vertical movement patterns may exist in at least this population, and perhaps in all other Nautilus populations. These patterns are most likely dictated by optimal feeding substrate, avoidance of daytime visual predators, the need for resting periods at around 200 m to regain neutral buoyancy, upper temperature limits of around 25°C, and implosion depths of 800 m. The slope, terrain, and biological community of the various geographically separated Nautilus populations may provide different permutations and combinations of the above factors, resulting in the preferred vertical movement strategy most suited to each population.

    Improving Tuberculosis Treatment Adherence Support: The Case for Targeted Behavioral Interventions

    No full text
    Problem definition: Lack of patient adherence to treatment protocols is a main barrier to reducing the global disease burden of tuberculosis (TB). We study the operational design of a treatment adherence support (TAS) platform that requires patients to verify their treatment adherence on a daily basis. Academic/practical relevance: Experimental results on the effectiveness of TAS programs have been mixed, and rigorous research is needed on how to structure these motivational programs, particularly in resource-limited settings. Our analysis establishes that patient engagement can be increased by personal sponsor outreach and that patient behavior data can be used to identify at-risk patients for targeted outreach. Methodology: We partner with a TB TAS provider and use data from a completed randomized controlled trial. We use administrative variation in the timing of peer sponsor outreach to evaluate the impact of personal messages on subsequent patient verification behavior. We then develop a rolling-horizon machine learning (ML) framework to generate dynamic risk predictions for patients enrolled on the platform. Results: We find that, on average, sponsor outreach to patients increases the odds of next-day treatment adherence verification by 35%. Furthermore, patients’ prior verification behavior can be used to accurately predict short-term (treatment adherence verification) and long-term (successful treatment completion) outcomes. These results allow the provider to target and implement behavioral interventions for at-risk patients. Managerial implications: Our results indicate that, compared with a benchmark policy, the TAS platform could reach the same number of at-risk patients with 6%–40% less capacity, or reach 2%–20% more at-risk patients with the same capacity, by using various ML-based prioritization policies that leverage patient engagement data. Because personal sponsor outreach to all patients is likely to be very costly, targeted TAS may substantially improve the cost-effectiveness of TAS programs.
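
    The rolling-horizon risk-prediction component can be sketched as follows (synthetic verification data and deliberately simple features, not the provider's actual pipeline): a classifier is fit on trailing verification histories and then used to rank patients by their predicted risk of missing the next day's verification, the signal used to target sponsor outreach.

        # Predict next-day non-verification risk from trailing adherence history (synthetic data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_patients, n_days, window = 300, 60, 14

        # verified[p, d] = 1 if patient p verified adherence on day d (synthetic behavior).
        base_rate = rng.uniform(0.4, 0.95, size=n_patients)
        verified = (rng.random((n_patients, n_days)) < base_rate[:, None]).astype(int)

        def features(history):
            """Engagement features over a trailing window of daily verifications."""
            recent = history[-window:]
            days_since_last = int(np.argmax(recent[::-1])) if recent.any() else window
            return [recent.mean(), recent[-3:].mean(), days_since_last]

        # Training set: features from history up to day d, label = missed verification on day d.
        X, y = [], []
        for d in range(window, n_days):
            for p in range(n_patients):
                X.append(features(verified[p, :d]))
                y.append(1 - verified[p, d])
        model = LogisticRegression().fit(np.array(X), np.array(y))

        # Rank patients by predicted risk of missing tomorrow's verification to prioritize outreach.
        today = np.array([features(verified[p]) for p in range(n_patients)])
        risk = model.predict_proba(today)[:, 1]
        print("highest-risk patients:", np.argsort(risk)[::-1][:10])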

    Planning a Community Approach to Diabetes Care in Low- and Middle-Income Countries Using Optimization

    Full text link
    Diabetes is a global health priority, especially in low- and middle-income countries, where over 50% of premature deaths are attributed to high blood glucose. Several studies have demonstrated the feasibility of using Community Health Worker (CHW) programs to provide affordable and culturally tailored solutions for early detection and management of diabetes. Yet, scalable models to design and implement CHW programs while accounting for screening, management, and patient enrollment decisions have not been proposed. We introduce an optimization framework to determine personalized CHW visits that maximize glycemic control at the community level. Our framework explicitly models the trade-off between screening new patients and providing management visits to individuals who are already enrolled in treatment. We account for patients' motivational states, which affect their decisions to enroll in or drop out of treatment and, therefore, the effectiveness of the intervention. We incorporate these decisions by modeling patients as utility-maximizing agents within a bi-level provider problem that we solve using approximate dynamic programming. By estimating patients' health and motivational states, our model builds visit plans that account for patients' trade-offs when deciding to enroll in treatment, leading to reduced dropout rates and improved resource allocation. We apply our approach to generate CHW visit plans using operational data from a social enterprise serving low-income neighborhoods in urban areas of India. Through extensive simulation experiments, we find that our framework requires up to 73.4% less capacity than the best naive policy to achieve the same performance in terms of glycemic control. Our experiments also show that our solution algorithm can improve upon naive policies by up to 124.5% using the same CHW capacity.
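
    A heavily simplified sketch of the visit-planning idea (synthetic parameters and a one-step value estimate; the paper's bi-level model and approximate dynamic program are far richer) allocates a fixed CHW capacity by scoring each patient on their glycemic state and the chance that their enrollment "utility" keeps them in the program:

        # One-period CHW visit allocation under limited capacity (synthetic, illustrative only).
        import numpy as np

        rng = np.random.default_rng(1)
        n_patients, capacity = 50, 12

        hba1c = rng.uniform(6.5, 11.0, n_patients)        # current glycemic state (HbA1c, %)
        motivation = rng.uniform(0.2, 1.0, n_patients)    # proxy for motivational state
        enrolled = rng.random(n_patients) < 0.5           # already in management vs. screening candidates
        visit_effect = 0.4                                # assumed average HbA1c reduction per management visit

        def expected_benefit(i):
            """One-step value of visiting patient i: effect size scaled by distance from target control,
            counted only if the patient's enrollment 'utility' keeps them in the program."""
            stay_prob = motivation[i] if motivation[i] > 0.3 else 0.0   # crude utility-maximizing dropout rule
            gain = visit_effect if enrolled[i] else 0.6 * visit_effect  # screening yields less immediate control
            return stay_prob * gain * max(hba1c[i] - 6.5, 0.0)

        # Visit the patients with the highest one-step expected benefit this period.
        visit_plan = sorted(range(n_patients), key=lambda i: -expected_benefit(i))[:capacity]
        print("patients visited this period:", sorted(visit_plan))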

    Sample size requirements for knowledge-based treatment planning

    No full text
    Purpose: To determine how training set size affects the accuracy of knowledge-based treatment planning (KBP) models. Methods: The authors selected four models from three classes of KBP approaches, corresponding to three distinct quantities that KBP models may predict: dose–volume histogram (DVH) points, DVH curves, and objective function weights. DVH point prediction is done using the best plan from a database of similar clinical plans; DVH curve prediction employs principal component analysis and multiple linear regression; and objective function weight prediction uses either logistic regression or K-nearest neighbors. The authors trained each KBP model using training sets of sizes n = 10, 20, 30, 50, 75, 100, 150, and 200. The authors set aside 100 randomly selected patients from their cohort of 315 prostate cancer patients from Princess Margaret Cancer Center to serve as a validation set for all experiments. For each value of n, the authors randomly selected 100 different training sets with replacement from the remaining 215 patients, and each of these training sets was used to train a model for each KBP approach. To evaluate the models, the authors predicted the KBP endpoints for each of the 100 patients in the validation set. To estimate the minimum required sample size, the authors used statistical testing to determine whether the median error for each sample size from 10 to 150 was equal to the median error for the maximum sample size of 200. Results: The minimum required sample size was different for each model. The DVH point prediction method predicts two dose metrics for the bladder and two for the rectum; the authors found that more than 200 samples were required to achieve consistent model predictions for all four metrics. For DVH curve prediction, the authors found that at least 75 samples were needed to accurately predict the bladder DVH, while only 20 samples were needed to predict the rectum DVH. Finally, for objective function weight prediction, at least 10 samples were needed to train the logistic regression model, while at least 150 samples were required to train the K-nearest neighbor methodology. Conclusions: The minimum sample size required to accurately train KBP models for prostate cancer depends on the specific model and the endpoint to be predicted. The authors' results may provide a lower bound for more complicated tumor sites.
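
    The sample-size experiment can be reproduced in outline as follows (synthetic data and a stand-in linear model rather than a real KBP model): for each training-set size, resample training sets with replacement, score a fixed validation set, and test whether the median error differs from that at the largest size.

        # Learning-curve style sample-size study with a median-error comparison (synthetic data).
        import numpy as np
        from scipy.stats import mannwhitneyu
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n_total, n_val, n_features = 315, 100, 4
        X = rng.normal(size=(n_total, n_features))
        y = X @ rng.normal(size=n_features) + 0.3 * rng.normal(size=n_total)  # synthetic planning endpoint

        val_idx = rng.choice(n_total, n_val, replace=False)                   # fixed validation set
        pool = np.setdiff1d(np.arange(n_total), val_idx)                      # remaining 215 patients
        sizes, n_repeats = [10, 20, 30, 50, 75, 100, 150, 200], 100

        errors = {}
        for n in sizes:
            errs = []
            for _ in range(n_repeats):
                train = rng.choice(pool, n, replace=True)                     # resample with replacement
                model = LinearRegression().fit(X[train], y[train])
                errs.append(np.median(np.abs(model.predict(X[val_idx]) - y[val_idx])))
            errors[n] = np.array(errs)

        # Compare the error distribution at each size against the largest size (n = 200).
        for n in sizes[:-1]:
            p = mannwhitneyu(errors[n], errors[sizes[-1]]).pvalue
            print(f"n={n:3d}: median error {np.median(errors[n]):.3f}, p vs n=200: {p:.3f}")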

    Models for predicting objective function weights in prostate cancer IMRT

    No full text
    Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and the OVSR at 0.1 cm were found to be the features most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between the LR, MLR, and KNN methodologies, with LR appearing to perform best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics, including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical-quality treatment plans by simultaneously predicting multiple weights that capture the trade-offs associated with sparing multiple OARs.
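
    A minimal sketch of the weight-prediction comparison (synthetic geometry features and a discretized weight target, not the authors' data or inverse-optimization weights) fits logistic regression and a distance-weighted KNN classifier on overlap-volume features and compares both against a population-average baseline:

        # Predict a (discretized) objective-function weight class from geometry features (synthetic).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        n = 315
        ov_04 = rng.uniform(0.0, 0.6, n)     # overlap volume ratio at 0.4 cm PTV expansion (synthetic)
        ovs_r = rng.uniform(-2.0, 0.0, n)    # rectum overlap-volume-histogram slope at 0.1 cm (synthetic)
        X = np.column_stack([ov_04, ovs_r])

        # Synthetic "optimal" rectum weight, discretized into low/medium/high classes.
        weight_class = np.digitize(0.7 * ov_04 - 0.2 * ovs_r + 0.05 * rng.normal(size=n), [0.2, 0.4])

        X_tr, X_te, y_tr, y_te = train_test_split(X, weight_class, test_size=0.3, random_state=0)

        lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_tr, y_tr)
        pop_avg = np.bincount(y_tr).argmax()  # population-average baseline: always predict the modal class

        print("LR accuracy:                 ", lr.score(X_te, y_te))
        print("weighted KNN accuracy:       ", knn.score(X_te, y_te))
        print("population-average accuracy: ", (y_te == pop_avg).mean())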
